Inductive Representation Learning on Large Graphs

This is a paper published at NIPS 2017 that proposes a model called GraphSAGE for node representation learning on graphs. The authors point out a problem with prior work:
However, most existing approaches require that all nodes in the graph are present during training of the embeddings;
The authors call these models transductive, whereas the framework they propose is inductive, so it can produce embeddings for unseen nodes.

Low-dimensional vector embeddings of nodes in large graphs have proved extremely useful as feature inputs for a wide variety of prediction and graph analysis tasks [5, 11, 28, 35, 36]. The basic idea behind node embedding approaches is to use dimensionality reduction techniques to distill the high-dimensional information about a node's graph neighborhood into a dense vector embedding.

The vanilla GCN is transductive; the authors extend it to the inductive setting by proposing a set of trainable aggregation functions that learn to combine feature information from a node's local neighborhood.
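To make the idea concrete, here is a minimal sketch of one layer with a mean aggregator in the spirit of GraphSAGE. The function name, the separate `W_self` / `W_neigh` weights, and the fixed-size neighbor sampling constant are illustrative assumptions for this note, not the paper's exact implementation:

```python
import numpy as np

def sage_mean_layer(h, adj_list, W_self, W_neigh, num_samples=10, rng=None):
    """One mean-aggregator layer (illustrative sketch, not the official code).

    h        : (N, d_in) current node representations
    adj_list : dict mapping node id -> list of neighbor ids
    W_self   : (d_in, d_out) weight applied to the node's own representation
    W_neigh  : (d_in, d_out) weight applied to the aggregated neighborhood
    """
    rng = rng or np.random.default_rng(0)
    N, d_in = h.shape
    out = np.zeros((N, W_self.shape[1]))
    for v in range(N):
        neigh = adj_list.get(v, [])
        if neigh:
            # sample a fixed-size neighborhood, then average the neighbor features
            sampled = rng.choice(neigh, size=min(num_samples, len(neigh)), replace=False)
            h_neigh = h[sampled].mean(axis=0)
        else:
            h_neigh = np.zeros(d_in)
        # combine self and aggregated neighborhood information, then apply ReLU
        z = h[v] @ W_self + h_neigh @ W_neigh
        out[v] = np.maximum(z, 0)
    # L2-normalize each node's new representation
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.clip(norms, 1e-12, None)
```

Because the layer only depends on sampled local neighborhoods and learned weights, it can be applied to nodes that were never seen during training, which is exactly what makes the approach inductive.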

The paper also mentions prior work that incorporates graph structure as a regularization term in the loss.
The graph-based loss function encourages nearby nodes to have similar representations, while enforcing that the representations of disparate nodes are highly distinct
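Concretely, the unsupervised graph-based loss in GraphSAGE has roughly the following negative-sampling form, where $z_u$ is the embedding of node $u$, $v$ is a node that co-occurs with $u$ on a fixed-length random walk, $\sigma$ is the sigmoid, $P_n$ is a negative-sampling distribution, and $Q$ is the number of negative samples:

$$
J_G(z_u) = -\log\big(\sigma(z_u^\top z_v)\big) - Q \cdot \mathbb{E}_{v_n \sim P_n(v)}\big[\log\big(\sigma(-z_u^\top z_{v_n})\big)\big]
$$

The first term pulls the embeddings of co-occurring (nearby) nodes together, while the second pushes the embeddings of randomly sampled negative nodes apart.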
